Localized Data Work as a Precondition for Data-Centric ML: A Case Study of Full Lifecycle Crop Disease Identification in Ghana
The Ghana Cashew Disease Identification with Artificial Intelligence (CADI AI) project demonstrates the importance of sound data work as a precondition
for the delivery of useful, localized data-centric solutions for public-good
tasks such as agricultural productivity and food security. Drone-collected data
and machine learning are used to determine crop stressors. The data, model, and
final app are developed jointly and made available to local farmers via a
desktop application.
Generative Fractional Diffusion Models
We generalize the continuous-time framework for score-based generative models
from an underlying Brownian motion (BM) to an approximation of fractional
Brownian motion (FBM). We derive a continuous reparameterization trick and the
reverse-time model by representing FBM as a stochastic integral over a family
of Ornstein-Uhlenbeck processes, defining generative fractional diffusion
models (GFDM) whose driving noise converges to a non-Markovian process of
infinite quadratic variation. The Hurst index of FBM enables
control over the roughness of the distribution-transforming path. To the best of
our knowledge, this is the first attempt to build a generative model upon a
stochastic process with infinite quadratic variation.
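The Hurst index H determines FBM's covariance, 0.5 (s^{2H} + t^{2H} - |s - t|^{2H}). As a minimal illustration of how H controls path roughness (a Cholesky-based sampler, not the paper's OU-process representation), one can draw FBM paths directly from that covariance:

```python
import numpy as np

def fbm_sample(n_steps, hurst, T=1.0, seed=0):
    """Sample one fractional Brownian motion path on [0, T] from the
    Cholesky factor of its covariance (illustrative, O(n^3))."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)   # grid without t = 0
    s, u = np.meshgrid(t, t)
    # FBM covariance: 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H})
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    return np.concatenate([[0.0], L @ rng.standard_normal(n_steps)])  # B_H(0) = 0

rough = fbm_sample(256, hurst=0.3)   # H < 1/2: rougher than Brownian motion
smooth = fbm_sample(256, hurst=0.7)  # H > 1/2: smoother, positively correlated
```

For H below 1/2 the sampled quadratic variation on a fixed grid grows as the grid is refined, which is the roughness regime the abstract refers to.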
DiffInfinite: Large Mask-Image Synthesis via Parallel Random Patch Diffusion in Histopathology
We present DiffInfinite, a hierarchical diffusion model that generates
arbitrarily large histological images while preserving long-range correlation
structural information. Our approach first generates synthetic segmentation
masks, subsequently used as conditions for the high-fidelity generative
diffusion process. The proposed sampling method can be scaled up to any desired
image size while only requiring small patches for fast training. Moreover, it
can be parallelized more efficiently than previous large-content generation
methods while avoiding tiling artefacts. The training leverages classifier-free
guidance to augment a small, sparsely annotated dataset with unlabelled data.
Our method alleviates unique challenges in histopathological imaging practice:
large-scale information, costly manual annotation, and protective data
handling. The biological plausibility of DiffInfinite data is validated in a
survey by ten experienced pathologists as well as a downstream segmentation
task. Furthermore, the model scores strongly on anti-copying metrics, which is
beneficial for the protection of patient data.
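The core sampling idea, growing an arbitrarily large image by repeatedly denoising small random overlapping patches, can be sketched with a toy stand-in denoiser (this is an illustration of the patch-wise scheme, not the authors' hierarchical, mask-conditioned model):

```python
import numpy as np

def random_patch_denoise(canvas, denoise_step, patch=64, n_iters=200, seed=0):
    """Toy sketch of large-image synthesis via random patch updates.
    `denoise_step` maps a patch to a slightly less noisy patch; a real
    model would use a trained (mask-conditioned) diffusion denoiser."""
    rng = np.random.default_rng(seed)
    H, W = canvas.shape[:2]
    for _ in range(n_iters):
        y = rng.integers(0, H - patch + 1)
        x = rng.integers(0, W - patch + 1)
        canvas[y:y + patch, x:x + patch] = denoise_step(
            canvas[y:y + patch, x:x + patch])
    return canvas

def toy_step(p):
    """Hypothetical stand-in denoiser: shrink toward the patch mean."""
    return 0.9 * p + 0.1 * p.mean()

big = np.random.default_rng(1).standard_normal((512, 512))  # noise canvas
out = random_patch_denoise(big, toy_step)
```

Because only small patches are ever touched, memory and training cost stay bounded regardless of canvas size, and disjoint patches could be updated in parallel.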
Data Models for Dataset Drift Controls in Machine Learning With Images
Camera images are ubiquitous in machine learning research. They also play a
central role in the delivery of important services spanning medicine and
environmental surveying. However, the application of machine learning models in
these domains has been limited because of robustness concerns. A primary
failure mode is performance drops due to differences between the training and
deployment data. While there are methods to prospectively validate the
robustness of machine learning models to such dataset drifts, existing
approaches do not account for explicit models of the primary object of
interest: the data. This makes it difficult to create physically faithful drift
test cases or to provide specifications of data models that should be avoided
when deploying a machine learning model. In this study, we demonstrate how
these shortcomings can be overcome by pairing machine learning robustness
validation with physical optics. We examine the role raw sensor data and
differentiable data models can play in controlling performance risks related to
image dataset drift. The findings are distilled into three applications. First,
drift synthesis enables the controlled generation of physically faithful drift
test cases. The experiments presented here show that the average decrease in
model performance is four to ten times less severe than under post-hoc
augmentation testing. Second, the gradient connection between task and data
models allows for drift forensics that can be used to specify
performance-sensitive data models which should be avoided during deployment of
a machine learning model. Third, drift adjustment opens up the possibility for
processing adjustments in the face of drift. This can lead to a speed-up and
stabilization of classifier training at a margin of up to 20% in validation
accuracy. A guide to access the open code and datasets is available at
https://github.com/aiaudit-org/raw2logit.
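A minimal sketch of what such an explicit data model can look like, assuming a toy raw-to-RGB pipeline (black-level subtraction, white balance, gamma); the parameter names are illustrative, not the raw2logit API:

```python
import numpy as np

def data_model(raw, black_level=0.05, wb_gains=(1.8, 1.0, 1.6), gamma=2.2):
    """Simplified, parameterized raw-to-RGB processing model. Each
    parameter is a drift knob: perturbing it synthesizes a physically
    interpretable dataset drift (illustrative only)."""
    img = np.clip(raw - black_level, 0.0, None)    # black-level subtraction
    img = img * np.asarray(wb_gains)               # per-channel white balance
    img = np.clip(img, 0.0, 1.0) ** (1.0 / gamma)  # gamma compression
    return img

raw = np.random.default_rng(0).uniform(0.0, 1.0, size=(8, 8, 3))
nominal = data_model(raw)
drifted = data_model(raw, wb_gains=(2.4, 1.0, 1.2))  # synthesized WB drift
```

Because every stage is a differentiable elementwise operation, gradients of a downstream task loss can flow back into the processing parameters, which is what enables the drift forensics and drift adjustment applications.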
Dataset similarity to assess semi-supervised learning under distribution mismatch between the labelled and unlabelled datasets
Semi-supervised deep learning (SSDL) is a popular strategy to leverage unlabelled data for machine learning when labelled data is not readily available. In real-world scenarios, different unlabelled data sources are usually available, with varying degrees of distribution mismatch with respect to the labelled datasets. This begs the question of which unlabelled dataset to choose for good SSDL outcomes. Oftentimes, semantic heuristics are used to match unlabelled data with labelled data. However, a quantitative and systematic approach to this selection problem would be preferable. In this work, we first test the SSDL MixMatch algorithm under various distribution mismatch configurations to study the impact on SSDL accuracy. Then, we propose a quantitative unlabelled dataset selection heuristic based on dataset dissimilarity measures. These are designed to systematically assess how distribution mismatch between the labelled and unlabelled datasets affects MixMatch performance. We refer to our proposed method as deep dataset dissimilarity measures (DeDiMs), designed to compare labelled and unlabelled datasets. They use the feature space of a generic Wide-ResNet, can be applied prior to learning, are quick to evaluate, and are model agnostic. The strong correlation in our tests between MixMatch accuracy and the proposed DeDiMs suggests that this approach can be a good fit for quantitatively ranking different unlabelled datasets prior to SSDL training.
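A DeDiM-style ranking can be sketched as a distance between feature statistics of the two datasets. Here random vectors stand in for the Wide-ResNet features, and the mean-feature distance is one illustrative choice of measure, not necessarily the paper's exact formulation:

```python
import numpy as np

def dedim_score(feats_labelled, feats_unlabelled):
    """Toy dataset dissimilarity in a fixed feature space: Euclidean
    distance between mean feature vectors. Computed before any training,
    so it is quick to evaluate and agnostic to the SSDL model."""
    mu_l = feats_labelled.mean(axis=0)
    mu_u = feats_unlabelled.mean(axis=0)
    return float(np.linalg.norm(mu_l - mu_u))

rng = np.random.default_rng(0)
labelled = rng.normal(0.0, 1.0, size=(500, 128))     # stand-in features
matched = rng.normal(0.0, 1.0, size=(500, 128))      # same distribution
mismatched = rng.normal(0.5, 1.0, size=(500, 128))   # shifted distribution

score_matched = dedim_score(labelled, matched)
score_mismatched = dedim_score(labelled, mismatched)
# lower score suggests a better-matched unlabelled pool for SSDL
```

Ranking candidate unlabelled pools by such a score before training is exactly the selection heuristic the abstract describes.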
Data models for dataset drift controls in machine learning with optical images
Camera images are ubiquitous in machine learning research. They also play a central role
in the delivery of important public services spanning medicine and environmental surveying.
However, the application of machine learning models in these domains has been limited
because of robustness concerns. A primary failure mode is performance drops due to
differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing
approaches do not account for explicit models of machine learning’s primary object of interest:
the data. This limits our ability to study and understand the relationship between data
generation and downstream machine learning model performance in a physically accurate
manner. In this study, we demonstrate how to overcome this limitation by pairing traditional
machine learning with physical optics to obtain explicit and differentiable data models. We
demonstrate how such data models can be constructed for image data and used to control
downstream machine learning model performance related to dataset drift. The findings
are distilled into three applications. First, drift synthesis enables the controlled generation
of physically faithful drift test cases to power model selection and targeted generalization.
Second, the gradient connection between machine learning task model and data model allows
advanced, precise tolerancing of task model sensitivity to changes in the data generation.
These drift forensics can be used to precisely specify the acceptable data environments
in which a task model may be run. Third, drift optimization opens up the possibility to
create drifts that help the task model learn better and faster, effectively optimizing the
data-generating process itself to support the downstream machine vision task. This is an
interesting upgrade to existing imaging pipelines which traditionally have been optimized to
be consumed by human users but not machine learning models. The data models require
access to raw sensor images as commonly processed at scale in industry domains such as
microscopy, biomedicine, autonomous vehicles or remote sensing. Alongside the data model
code we release two datasets to the public that we collected as part of this work. In total,
the two datasets, Raw-Microscopy and Raw-Drone, comprise 1,488 scientifically calibrated
reference raw sensor measurements, 8,928 raw intensity variations as well as 17,856 images
processed through twelve data models with different configurations. A guide to access the
open code and datasets is available at https://github.com/aiaudit-org/raw2logit.
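The drift-forensics idea, measuring how sensitive the task model's loss is to a data-model parameter, can be sketched with a one-knob toy data model, using central finite differences as a stand-in for the autodiff gradient (all names here are hypothetical):

```python
import numpy as np

def task_loss(img, target):
    """Stand-in task loss; real setups use a trained task model."""
    return float(np.mean((img - target) ** 2))

def drift_sensitivity(raw, target, gamma=2.2, eps=1e-4):
    """Central-difference sensitivity of the task loss to one data-model
    parameter (here: a gamma knob). With differentiable data models this
    gradient would come from autodiff instead of finite differences."""
    def process(g):                                  # toy one-knob data model
        return np.clip(raw, 0.0, 1.0) ** (1.0 / g)
    return (task_loss(process(gamma + eps), target)
            - task_loss(process(gamma - eps), target)) / (2 * eps)

rng = np.random.default_rng(0)
raw = rng.uniform(0.1, 0.9, size=(16, 16))
target = raw ** (1.0 / 2.2)            # labels produced at the nominal gamma
s_nominal = drift_sensitivity(raw, target)             # ~0 at operating point
s_drifted = drift_sensitivity(raw, target, gamma=1.8)  # nonzero under drift
```

Sweeping such sensitivities over the data-model parameters is one way to delimit the acceptable data environments in which the task model may be deployed.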